11-20-2022
In this section, I use osmnx to analyze crash incidence in Center City, Philadelphia. I then visualize a second geospatial data set, queried through an API, using Folium in Python.
import json
import folium
import requests
import carto2gpd
import matplotlib
import osmnx as ox
import numpy as np
import pandas as pd
import altair as alt
import networkx as nx
import geopandas as gpd
import matplotlib.pyplot as plt
from IPython.display import IFrame
from folium.plugins import HeatMap
import matplotlib.colors as mcolors
# get districts data
districts = gpd.read_file('http://data.phl.opendata.arcgis.com/datasets/0960ea0f38f44146bb562f2b212075aa_0.geojson')
districts.head()
| | OBJECTID_1 | OBJECTID | DIST_NAME | ABBREV | Shape__Area | Shape__Length | PlanningDist | DaytimePop | geometry |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 14 | River Wards | RW | 2.107270e+08 | 66931.595020 | None | None | POLYGON ((-75.09798 40.00496, -75.09687 40.005... |
| 1 | 2 | 3 | North Delaware | NDEL | 2.700915e+08 | 89213.074378 | None | None | POLYGON ((-74.98159 40.05363, -74.98139 40.053... |
| 2 | 3 | 0 | Lower Far Northeast | LFNE | 3.068529e+08 | 92703.285159 | None | None | POLYGON ((-74.96443 40.11728, -74.96434 40.117... |
| 3 | 4 | 9 | Central | CTR | 1.782880e+08 | 71405.143450 | None | None | POLYGON ((-75.14791 39.96733, -75.14715 39.967... |
| 4 | 5 | 10 | University Southwest | USW | 1.296468e+08 | 65267.676141 | None | None | POLYGON ((-75.18742 39.96338, -75.18644 39.963... |
select = districts['DIST_NAME'].isin(['Central'])
central = districts.loc[select]
central.head()
| | OBJECTID_1 | OBJECTID | DIST_NAME | ABBREV | Shape__Area | Shape__Length | PlanningDist | DaytimePop | geometry |
|---|---|---|---|---|---|---|---|---|---|
| 3 | 4 | 9 | Central | CTR | 1.782880e+08 | 71405.14345 | None | None | POLYGON ((-75.14791 39.96733, -75.14715 39.967... |
type(central)
geopandas.geodataframe.GeoDataFrame
central_outline = central.geometry.unary_union
central_outline
type(central_outline)
shapely.geometry.polygon.Polygon
Now I have a polygon variable central_outline of type shapely.geometry.polygon.Polygon.
Next, I use OSMnx to create a network graph (with network_type 'drive') from this polygon boundary.
# get the graph
G = ox.graph_from_polygon(central_outline, network_type='drive')
ox.plot_graph(ox.project_graph(G),
node_size=2,
bgcolor="#8bb396",
edge_color='white'
);
Next, I use OSMnx to create a GeoDataFrame of the network edges in the graph object. The GeoDataFrame should contain the edges but not the nodes of the network.
type(G)
networkx.classes.multidigraph.MultiDiGraph
# only get the edges
central_edges = ox.graph_to_gdfs(G,
edges=True,
nodes=False)
# we have lots of data associated with each edge!
central_edges.head()
| u | v | key | osmid | oneway | name | highway | reversed | length | geometry | lanes | maxspeed | tunnel | bridge | ref | width | service | access | junction |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 109727439 | 109911666 | 0 | 132508434 | True | Bainbridge Street | residential | False | 44.137 | LINESTRING (-75.17104 39.94345, -75.17053 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 109727448 | 109727439 | 0 | 12109011 | True | South Colorado Street | residential | False | 109.484 | LINESTRING (-75.17125 39.94248, -75.17120 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 109727448 | 110034229 | 0 | 12159387 | True | Fitzwater Street | residential | False | 91.353 | LINESTRING (-75.17125 39.94248, -75.17137 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 109727507 | 110024052 | 0 | 193364514 | True | Carpenter Street | residential | False | 53.208 | LINESTRING (-75.17196 39.93973, -75.17134 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 109728761 | 110274344 | 0 | 672312336 | True | Brown Street | residential | False | 58.270 | LINESTRING (-75.17317 39.96951, -75.17250 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
# plot it like any old GeoDataFrame
ax3 = central_edges.to_crs(epsg=2272).plot(color='white', linewidth=0.5)
# add the central area boundary
boundary = gpd.GeoSeries([central_outline], crs='EPSG:4326')
boundary.to_crs(epsg=2272).plot(ax=ax3, facecolor='#8bb396', edgecolor='black', linewidth=1, zorder=1)
ax3.set_axis_off()
type(boundary)
geopandas.geoseries.GeoSeries
crash = pd.read_csv('data/CRASH_PHILADELPHIA_2021.csv')
type(crash)
pandas.core.frame.DataFrame
In this step, I convert crash to crash_geo with a geometry column, using the DEC_LAT and DEC_LONG columns for latitude and longitude. The full data dictionary for the data is available here.
crash_geo = gpd.GeoDataFrame(crash, geometry=gpd.points_from_xy(crash.DEC_LONG, crash.DEC_LAT))
type(crash_geo)
geopandas.geodataframe.GeoDataFrame
In this step, I create a new central-area boundary geometry: the .geometry.unary_union.convex_hull property gives a nice outer boundary region.
center_area = boundary.geometry.unary_union.convex_hull
type(center_area)
shapely.geometry.polygon.Polygon
I use the within() function of the crash GeoDataFrame to find which crashes fall within the boundary.
points = crash_geo.within(center_area)
points
0 False
1 False
2 True
3 False
4 False
...
10458 False
10459 False
10460 False
10461 True
10462 False
Length: 10463, dtype: bool
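Under the hood, within() performs a point-in-polygon test for each crash location. A minimal ray-casting sketch of that test (a simplification of what shapely actually does; the square and test points are made up for illustration):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# a unit square and two test points (made-up coordinates)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(2.0, 0.5, square))  # False
```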
crash_central = crash_geo.loc[points].copy()
crash_central.reset_index(inplace=True, drop=True)
len(crash_central)
1411
In this step, I need to find the nearest edge (street) in the graph for each crash. To do this, osmnx will calculate the distance from each crash to the graph edges. For this calculation to be accurate, I need to convert from latitude/longitude to the local state plane CRS for Philadelphia, EPSG:2272.
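To see why the projection matters: away from the equator a degree of longitude covers fewer feet than a degree of latitude, so Euclidean distances computed on raw latitude/longitude mix two different scales. A rough back-of-the-envelope check at Philadelphia's latitude (the feet-per-degree figure is approximate):

```python
import math

# ~69 miles per degree of latitude, in US feet (approximate figure)
FT_PER_DEG_LAT = 69 * 5280
phila_lat = 39.95

# at this latitude, one degree of longitude spans far fewer feet
ft_per_deg_lon = FT_PER_DEG_LAT * math.cos(math.radians(phila_lat))
print(round(ft_per_deg_lon))  # ~279,000 ft vs 364,320 ft per degree of latitude
```

Projecting to EPSG:2272 puts both axes in the same unit (US survey feet), so nearest-edge distances are meaningful.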
First, I project the graph (G) using ox.project_graph().
G_2272 = ox.project_graph(G, to_crs='epsg:2272')
G_2272.graph['crs']
<Derived Projected CRS: EPSG:2272> Name: NAD83 / Pennsylvania South (ftUS) Axis Info [cartesian]: - X[east]: Easting (US survey foot) - Y[north]: Northing (US survey foot) Area of Use: - name: United States (USA) - Pennsylvania - counties of Adams; Allegheny; Armstrong; Beaver; Bedford; Berks; Blair; Bucks; Butler; Cambria; Chester; Cumberland; Dauphin; Delaware; Fayette; Franklin; Fulton; Greene; Huntingdon; Indiana; Juniata; Lancaster; Lawrence; Lebanon; Lehigh; Mifflin; Montgomery; Northampton; Perry; Philadelphia; Schuylkill; Snyder; Somerset; Washington; Westmoreland; York. - bounds: (-80.53, 39.71, -74.72, 41.18) Coordinate Operation: - name: SPCS83 Pennsylvania South zone (US Survey feet) - method: Lambert Conic Conformal (2SP) Datum: North American Datum 1983 - Ellipsoid: GRS 1980 - Prime Meridian: Greenwich
Then I project the crash data itself with the .to_crs() function.
# I set the 4326 CRS first because the dataframe needs a projection,
# or else GeoPandas raises a ValueError ("cannot transform naive geometries")
crash_central.crs = 'epsg:4326'
# then convert to the proper CRS
crash_central2272 = crash_central.to_crs(epsg=2272)
crash_central2272.crs
<Derived Projected CRS: EPSG:2272> Name: NAD83 / Pennsylvania South (ftUS) Axis Info [cartesian]: - X[east]: Easting (US survey foot) - Y[north]: Northing (US survey foot) Area of Use: - name: United States (USA) - Pennsylvania - counties of Adams; Allegheny; Armstrong; Beaver; Bedford; Berks; Blair; Bucks; Butler; Cambria; Chester; Cumberland; Dauphin; Delaware; Fayette; Franklin; Fulton; Greene; Huntingdon; Indiana; Juniata; Lancaster; Lawrence; Lebanon; Lehigh; Mifflin; Montgomery; Northampton; Perry; Philadelphia; Schuylkill; Snyder; Somerset; Washington; Westmoreland; York. - bounds: (-80.53, 39.71, -74.72, 41.18) Coordinate Operation: - name: SPCS83 Pennsylvania South zone (US Survey feet) - method: Lambert Conic Conformal (2SP) Datum: North American Datum 1983 - Ellipsoid: GRS 1980 - Prime Meridian: Greenwich
I use ox.distance.nearest_edges() in this step, which takes three arguments: the graph, the x coordinates (the x attribute of the geometry column), and the y coordinates (the y attribute of the geometry column).
nearest = ox.distance.nearest_edges(G_2272, X=crash_central2272['geometry'].x, Y=crash_central2272['geometry'].y)
The result is a list of (u, v, key) tuples, where u and v are the IDs of the nodes that the edge links together; I will ignore the key value in the rest of the analysis.
nearest_array = np.array(nearest)
nearest_df = pd.DataFrame(nearest_array, columns=['u', 'v', 'key'])
nearest_df
| | u | v | key |
|---|---|---|---|
| 0 | 1903608761 | 109775193 | 0 |
| 1 | 8482829382 | 7065714513 | 0 |
| 2 | 7384791791 | 775424603 | 0 |
| 3 | 2043885897 | 5879856051 | 0 |
| 4 | 109739559 | 110225812 | 0 |
| ... | ... | ... | ... |
| 1406 | 5562239293 | 1888418037 | 0 |
| 1407 | 109799565 | 109799300 | 0 |
| 1408 | 109784213 | 109789845 | 0 |
| 1409 | 7738709997 | 1479201367 | 0 |
| 1410 | 109770355 | 109770169 | 0 |
1411 rows × 3 columns
Here are the steps in this section: the nearest-edge DataFrame has columns u, v, and key (we will only use the u and v columns). I group by u and v, count the rows in each group, and rename the resulting column crash_count.
nearest_count = nearest_df.groupby(['u','v']).count().reset_index().rename(columns={'key':'crash_count'})
nearest_count
| | u | v | crash_count |
|---|---|---|---|
| 0 | 109729474 | 3425014859 | 2 |
| 1 | 109729486 | 109729699 | 1 |
| 2 | 109729486 | 110342146 | 2 |
| 3 | 109729699 | 109811674 | 2 |
| 4 | 109729709 | 109729731 | 3 |
| ... | ... | ... | ... |
| 496 | 7738710013 | 110207010 | 3 |
| 497 | 8482829382 | 7065714513 | 71 |
| 498 | 8553549495 | 109811783 | 1 |
| 499 | 9244049904 | 634779465 | 2 |
| 500 | 9518379104 | 110318202 | 25 |
501 rows × 3 columns
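The groupby above is just tallying (u, v) pairs. A pure-Python sketch of the same aggregation with collections.Counter (the node IDs here are made up for illustration):

```python
from collections import Counter

# toy nearest-edge results: (u, v, key) tuples with made-up node IDs
nearest = [(101, 202, 0), (101, 202, 0), (303, 404, 0), (101, 202, 0)]

# dropping key and tallying (u, v) pairs mirrors
# nearest_df.groupby(['u', 'v']).count()
crash_counts = Counter((u, v) for u, v, _ in nearest)
print(crash_counts[(101, 202)])  # 3
print(crash_counts[(303, 404)])  # 1
```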
Then I get a DataFrame with three columns: u, v, and crash_count.
I use pandas to merge them on the u and v columns. This will associate the total crash count with each edge in the street network.
Tips:
Use a left merge where the first argument of the merge is the edges GeoDataFrame; this ensures no edges are removed during the merge. Then use the fillna(0) function to fill in missing crash count values with zero.
central_crash_count = pd.merge(central_edges, nearest_count, how='left', on=['u','v'])
central_crash_count.head()
| | u | v | osmid | oneway | name | highway | reversed | length | geometry | lanes | maxspeed | tunnel | bridge | ref | width | service | access | junction | crash_count |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 109727439 | 109911666 | 132508434 | True | Bainbridge Street | residential | False | 44.137 | LINESTRING (-75.17104 39.94345, -75.17053 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 1 | 109727448 | 109727439 | 12109011 | True | South Colorado Street | residential | False | 109.484 | LINESTRING (-75.17125 39.94248, -75.17120 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 2 | 109727448 | 110034229 | 12159387 | True | Fitzwater Street | residential | False | 91.353 | LINESTRING (-75.17125 39.94248, -75.17137 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 3 | 109727507 | 110024052 | 193364514 | True | Carpenter Street | residential | False | 53.208 | LINESTRING (-75.17196 39.93973, -75.17134 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 4 | 109728761 | 110274344 | 672312336 | True | Brown Street | residential | False | 58.270 | LINESTRING (-75.17317 39.96951, -75.17250 39.9... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
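The left merge keeps every edge and leaves the crash count as NaN for edges with no nearby crashes, which fillna(0) then turns into zeros. A pure-Python sketch of that left-join-plus-fill logic (toy node IDs, not real ones):

```python
# toy street edges keyed by (u, v), and crash counts for a subset of them
edges = [(101, 202), (303, 404), (505, 606)]
crash_counts = {(101, 202): 2, (303, 404): 1}

# left join: every edge survives; edges with no crashes default to 0,
# mirroring pd.merge(..., how='left') followed by fillna(0)
merged = [(u, v, crash_counts.get((u, v), 0)) for u, v in edges]
print(merged[2])  # (505, 606, 0)
```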
Let's calculate a "crash index" that provides a normalized measure of the crash frequency per street.
First, divide the crash count by the length column:
central_crash_count['divide_index'] = central_crash_count['crash_count']/central_crash_count['length']
Then, take the logarithm with the log10() function:
central_crash_count['log10_index'] = central_crash_count['divide_index'].apply(np.log10)
Note: since the crash index involves a log transformation, you should only calculate the index for streets where the crash count is greater than zero.
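In plain Python, that guard looks like the following sketch (toy counts and lengths; in the dataframe the equivalent is selecting rows with crash_count > 0 before applying np.log10):

```python
import math

counts = [2, 0, 71]              # toy crash counts per edge
lengths = [44.1, 109.5, 880.0]   # toy edge lengths in feet

# only compute the log index where the count is positive;
# log10 of zero is undefined, so those edges stay as None
index = [math.log10(c / l) if c > 0 else None
         for c, l in zip(counts, lengths)]
print(index[1])  # None (no crashes on this edge)
```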
# Minimum
min_val = central_crash_count['log10_index'].min()
# Maximum
max_val = central_crash_count['log10_index'].max()
# Calculate a normalized column
normalized = (central_crash_count['log10_index'] - min_val) / (max_val - min_val)
# Add to the dataframe
central_crash_count['index_normalized'] = normalized
Now the data frame includes a new normalized column called index_normalized.
central_crash_count.head()
| | u | v | osmid | oneway | name | highway | reversed | length | geometry | lanes | ... | bridge | ref | width | service | access | junction | crash_count | divide_index | log10_index | index_normalized |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 109727439 | 109911666 | 132508434 | True | Bainbridge Street | residential | False | 44.137 | LINESTRING (-75.17104 39.94345, -75.17053 39.9... | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 1 | 109727448 | 109727439 | 12109011 | True | South Colorado Street | residential | False | 109.484 | LINESTRING (-75.17125 39.94248, -75.17120 39.9... | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 2 | 109727448 | 110034229 | 12159387 | True | Fitzwater Street | residential | False | 91.353 | LINESTRING (-75.17125 39.94248, -75.17137 39.9... | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 3 | 109727507 | 110024052 | 193364514 | True | Carpenter Street | residential | False | 53.208 | LINESTRING (-75.17196 39.93973, -75.17134 39.9... | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 4 | 109728761 | 110274344 | 672312336 | True | Brown Street | residential | False | 58.270 | LINESTRING (-75.17317 39.96951, -75.17250 39.9... | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
5 rows × 22 columns
I use matplotlib's hist() function to plot the crash index values from the previous step. As the histogram shows, the index values are roughly Gaussian-distributed, which is the justification for the log transform. The matplotlib color map in the next step requires data between 0 and 1, which is why the index was normalized into the index_normalized column.
plt.hist(central_crash_count['index_normalized'], 50, density = True,
histtype ='bar',
color = '#8bb396',
ec = "white",
lw = 0.4)
plt.xlabel('Normalized crash index')
plt.ylabel('Density')
plt.title('Histogram of the crash index of Central District\nin Philadelphia, 2021',
fontweight ="bold")
Text(0.5, 1.0, 'Histogram of the crash index of Central District\nin Philadelphia, 2021')
I use GeoPandas to make the plot, coloring the streets by the crash index column.
Tip: if you use the viridis color map, try setting the facecolor of the axes to black for better contrast of the colors.
fig, ax = plt.subplots(figsize = (10,10))
central.plot(ax = ax, color = '#3a3d37')
central_edges.plot(ax = ax, color = '#676e60', linewidth=1)
central_crash_count.plot(column ='index_normalized',
ax = ax,
cmap = 'viridis_r',
linewidth=2
)
ax.set_title('Crash Index of Central District in Philadelphia, 2021')
ax.set_axis_off()
In this part, I will use Folium to make an interactive version of the map from the previous section.
#select the columns I need to plot
cols = ['name', 'index_normalized', 'crash_count', 'geometry']
need_data = central_crash_count[cols]
# drop NA rows
# if I don't drop the NA values, folium raises an error ("ufunc 'isnan' not supported for the input types")
need_data = need_data.dropna()
need_data
| | name | index_normalized | crash_count | geometry |
|---|---|---|---|---|
| 14 | Arch Street | 0.321899 | 2.0 | LINESTRING (-75.14847 39.95259, -75.14859 39.9... |
| 15 | North Independence Ml East | 0.325336 | 2.0 | LINESTRING (-75.14832 39.95333, -75.14813 39.9... |
| 16 | North 5th Street | 0.021394 | 1.0 | LINESTRING (-75.14832 39.95333, -75.14822 39.9... |
| 26 | Callowhill Street | 0.311233 | 2.0 | LINESTRING (-75.14724 39.95779, -75.14739 39.9... |
| 27 | Willow Street | 0.311317 | 2.0 | LINESTRING (-75.14714 39.95860, -75.14788 39.9... |
| ... | ... | ... | ... | ... |
| 3880 | North Front Street | 0.332399 | 3.0 | LINESTRING (-75.13626 39.96884, -75.13627 39.9... |
| 3892 | Vine Street Expressway | 1.000000 | 71.0 | LINESTRING (-75.17958 39.95998, -75.17911 39.9... |
| 3894 | North 6th Street | 0.306967 | 1.0 | LINESTRING (-75.14718 39.96702, -75.14729 39.9... |
| 3913 | Spring Garden Street | 0.587106 | 2.0 | LINESTRING (-75.13708 39.96018, -75.13730 39.9... |
| 3917 | South 24th Street | 0.750191 | 25.0 | LINESTRING (-75.17920 39.95268, -75.17922 39.9... |
473 rows × 4 columns
# Pass in GeoJSON data for central crash data
crash_json = need_data.to_json()
# Pass in GeoJSON data for central street networks
central_json = central_edges.to_json()
cmap = plt.get_cmap('viridis_r')
cmap
# define the style functions
def get_neighborhood_style(feature):
"""Return a style dict."""
return {"color": "#556158",
"weight": 1.2,
"fillOpacity": 0.2}
def get_highlighted_style(feature):
"""Return a style dict when highlighting a feature."""
return {"weight": 3, "color": "orange"}
def get_style(feature):
"""
Given an input GeoJSON feature, return a style dict.
Notes
-----
The color in the style dict is determined by the
"index_normalized" column in the input "feature".
"""
# Get the data value from the feature
value = feature['properties']['index_normalized']
# Evaluate the color map
# NOTE: value must between 0 and 1
rgb_color = cmap(value) # this is an RGB tuple
# Convert to hex string
color = mcolors.rgb2hex(rgb_color)
# Return the style dictionary
return {'weight': 3,
'color': color,
'fillColor': color,
"fillOpacity": 1}
I add the data to the map with the folium.GeoJson() function. First I initialize a Folium map centered on Philadelphia, choosing the Cartodb dark_matter tiles for better contrast with the viridis color map. I also add a GeoJsonTooltip object that includes the street name and crash count, so you can quickly identify which streets have the highest index values.
# STEP 1: Initialize the map
m = folium.Map(
location=[39.955, -75.16],
tiles='Cartodb dark_matter',
zoom_start=13.5
)
# STEP 2: Add the street network GeoJson to the map
folium.GeoJson(
central_json,
style_function=get_neighborhood_style,
tooltip=folium.GeoJsonTooltip(['name'])
).add_to(m)
# STEP 3: Add the crash street GeoJson to the map
folium.GeoJson(
crash_json,
style_function=get_style, # the style function colors streets by crash index
highlight_function=get_highlighted_style,
tooltip=folium.GeoJsonTooltip(['name','crash_count'])
).add_to(m)
m
In this part, I will visualize a geospatial data set, queried using an API, with Folium in Python. The data set must satisfy a few requirements: it should be queried through an API (I use the carto2gpd library to do the querying). Alternatively, you can use the requests module in Python, or read the GeoJSON returned by the API directly with geopandas. For example:
# API URL
url = "https://phl.carto.com/api/v2/sql"
# Table name on CARTO
table_name = "shootings"
I use the carto2gpd package to get the data.
shootings = carto2gpd.get(url, table_name)
I use the .geometry.notnull() function to trim the data frame to incidents with valid geometries.
shootings = shootings.loc[shootings.geometry.notnull()]
len(shootings)
13080
The data already includes latitude (point_y) and longitude (point_x) columns, so I don't need to extract them from the geometry.
shootings.head()
| | geometry | cartodb_id | objectid | year | dc_key | code | date_ | time | race | sex | ... | offender_injured | offender_deceased | location | latino | point_x | point_y | dist | inside | outside | fatal |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | POINT (-75.14207 40.02063) | 1 | 6156898 | 2022 | 202235049652.0 | 111 | 2022-08-21T00:00:00Z | 15:37:00 | B | F | ... | N | N | 1000 BLOCK W WINGOHOCKEN ST | 0.0 | -75.142068 | 40.020635 | 35 | 0.0 | 1.0 | 1.0 |
| 1 | POINT (-75.15020 40.04224) | 2 | 6156899 | 2022 | 202235063700.0 | 411 | 2022-10-23T00:00:00Z | 05:45:00 | B | M | ... | N | N | 5800 BLOCK OGONTZ AVE | 0.0 | -75.150202 | 40.042240 | 35 | 0.0 | 1.0 | 0.0 |
| 3 | POINT (-75.14464 40.03694) | 4 | 6156901 | 2022 | 202235067083.0 | 411 | 2022-11-07T00:00:00Z | 12:18:00 | B | M | ... | N | N | 5500 BLOCK OLD YORK ROAD | 0.0 | -75.144636 | 40.036944 | 35 | 0.0 | 1.0 | 0.0 |
| 4 | POINT (-75.14608 40.05740) | 5 | 6156902 | 2022 | 202235067614.0 | 411 | 2022-11-09T00:00:00Z | 18:59:00 | B | M | ... | N | N | 1700 BLOCK 68TH AV | 0.0 | -75.146080 | 40.057398 | 35 | 0.0 | 1.0 | 0.0 |
| 5 | POINT (-75.13117 40.03380) | 6 | 6156903 | 2022 | 202235067698.0 | 411 | 2022-11-10T00:00:00Z | 04:43:00 | B | M | ... | N | N | 5400 BLOCK N 5TH ST | 0.0 | -75.131175 | 40.033804 | 35 | 1.0 | 0.0 | 0.0 |
5 rows × 23 columns
# Split out the data
fatal = shootings.query("fatal == 1.0")
nonfatal = shootings.query("fatal == 0.0")
len(fatal)
2662
len(nonfatal)
10310
# Split out the gender
fatal_female = fatal.query("sex == 'F'")
fatal_male = fatal.query("sex == 'M'")
len(fatal_female)
217
len(fatal_male)
2445
I extract the coordinates as NumPy arrays (using the .values attribute):
fatal_coords = fatal[['point_y', 'point_x']].values
nonfatal_coords = nonfatal[['point_y', 'point_x']].values
fatal_female_coords = fatal_female[['point_y', 'point_x']].values
fatal_male_coords = fatal_male[['point_y', 'point_x']].values
The HeatMap takes a list of coordinates: the first column is latitude and the second is longitude.
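Since point_x is longitude and point_y is latitude, the column order in the selection above is what puts latitude first. A toy pure-Python check of that reordering (made-up coordinates):

```python
# toy records mimicking the shootings rows (made-up coordinates)
records = [{'point_x': -75.16, 'point_y': 39.95},
           {'point_x': -75.13, 'point_y': 39.99}]

# HeatMap wants [lat, lon] pairs, i.e. point_y before point_x
coords = [[r['point_y'], r['point_x']] for r in records]
print(coords[0])  # [39.95, -75.16]
```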
# Initialize map
m = folium.Map(
location=[39.99, -75.13],
tiles='Cartodb Positron',
zoom_start=12
)
# Add heat map coordinates
HeatMap(fatal_coords,
gradient={0.1: 'lime', 0.3: '#c2e32d', 0.5: 'yellow', 0.7: 'orange', 1: '#a82413'},
min_opacity=0.5,
blur=35).add_to(m)
m
# Initialize map
m = folium.Map(
location=[39.99, -75.13],
tiles='Cartodb Positron',
zoom_start=14
)
HeatMap(fatal_female_coords,
gradient={0.1: 'lime', 0.3: '#c2e32d', 0.5: 'yellow', 0.7: 'orange', 1: '#a82413'},
min_opacity=0.5,
blur=30).add_to(m)
m